stochastic approximation

Stochastic approximation methods are a family of iterative stochastic optimization algorithms that attempt to find zeroes or extrema of functions which cannot be computed directly, but only estimated via noisy observations.
Mathematically, this refers to solving:
: \min_{x \in \Theta}\; f(x) = \min_{x \in \Theta} \mathbb{E}[F(x,\xi)]
where the objective is to find the parameter x \in \Theta that minimizes f(x) for some unknown random variable \xi . Denoting by d the dimension of the parameter x , we assume that while the domain \Theta \subset \mathbb R^d is known, the objective function f(x) cannot be computed exactly, but only approximated via simulation. Intuitively, f(x) is the original function we want to minimize; however, due to noise, f(x) cannot be evaluated exactly. This situation is modeled by the function F(x,\xi), where \xi represents the noise and is a random variable. Since \xi is a random variable, so is the value of F(x,\xi). The objective is then to minimize f(x) while only being able to evaluate F(x,\xi). A reasonable way to do this is to minimize the expectation of F(x,\xi), i.e., \mathbb E[F(x,\xi)].
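The role of the expectation can be illustrated with a minimal sketch. The particular noisy function F(x,\xi) = (x-3)^2 + \xi with Gaussian \xi is an illustrative assumption, not from the original; it shows that a single evaluation of F is unreliable, while averaging many samples recovers f(x) = \mathbb E[F(x,\xi)]:

```python
import random

random.seed(1)

# Hypothetical noisy objective: F(x, xi) = (x - 3)^2 + xi, xi ~ N(0, 1),
# so f(x) = E[F(x, xi)] = (x - 3)^2 and the minimizer is x* = 3.
def F(x):
    return (x - 3.0) ** 2 + random.gauss(0.0, 1.0)

# A single evaluation is noisy; a Monte Carlo average approximates f(x).
samples = [F(2.0) for _ in range(100_000)]
estimate = sum(samples) / len(samples)
print(estimate)  # close to f(2.0) = 1.0
```

In practice, stochastic approximation methods avoid averaging a large batch at each point and instead fold the averaging into the iteration itself, as the Robbins–Monro algorithm below does.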
The first, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms, introduced in 1951 and 1952 respectively.
==Robbins–Monro algorithm==
The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, presented a methodology for solving a root-finding problem where the function is represented as an expected value. Assume that we have a function M(x) and a constant \alpha such that the equation M(x) = \alpha has a unique root at x=\theta. It is assumed that while we cannot directly observe the function M(x), we can instead obtain measurements of the random variable N(x), where \mathbb E[N(x)] = M(x). The structure of the algorithm is then to generate iterates of the form:
::x_{n+1}-x_n=a_n(\alpha-N(x_n))
Here, a_1, a_2, \dots is a sequence of positive step sizes. Robbins and Monro proved (Theorem 2) that x_n converges in L^2 (and hence also in probability) to \theta provided that:
* N(x) is uniformly bounded,
* M(x) is nondecreasing,
* M'(\theta) exists and is positive, and
* The sequence a_n satisfies the following requirements:
:: \qquad \sum_{n=1}^{\infty} a_n = \infty \quad \mbox{ and } \quad \sum_{n=1}^{\infty} a_n^2 < \infty \quad
A particular sequence of step sizes satisfying these conditions, suggested by Robbins and Monro, has the form a_n=a/n for a > 0 . Other sequences are possible, but in order to average out the noise in N(x), the above conditions must be met.
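The iteration above can be sketched directly in code. This is a minimal illustration, not a production implementation; the test problem M(x) = 2x + 1 with additive Gaussian noise is an assumed example (it is nondecreasing with positive slope at the root, so it satisfies the convergence conditions), and a_n = a/n is the step-size schedule suggested by Robbins and Monro:

```python
import random

def robbins_monro(observe, alpha, x0, a=1.0, n_steps=20_000):
    """Robbins-Monro iteration x_{n+1} = x_n + a_n * (alpha - N(x_n)),
    with step sizes a_n = a / n, which satisfy
    sum a_n = infinity and sum a_n^2 < infinity."""
    x = x0
    for n in range(1, n_steps + 1):
        x += (a / n) * (alpha - observe(x))
    return x

random.seed(0)

# Hypothetical test problem: M(x) = 2x + 1, observed with additive
# Gaussian noise, so N(x) = M(x) + eps and E[N(x)] = M(x).
# The unique root of M(x) = 0 is theta = -0.5.
def noisy_measurement(x):
    return 2.0 * x + 1.0 + random.gauss(0.0, 0.1)

theta_hat = robbins_monro(noisy_measurement, alpha=0.0, x0=5.0)
print(theta_hat)  # close to the true root theta = -0.5
```

Note that each iterate uses only a single noisy measurement N(x_n); the decaying step sizes do the averaging over time, which is the essential idea of the method.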

Excerpt source: the free encyclopedia Wikipedia
Read the full article on "stochastic approximation" at Wikipedia




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.